
    Accurate brain-age models for routine clinical MRI examinations

    Convolutional neural networks (CNN) can accurately predict chronological age in healthy individuals from structural MRI brain scans. Potentially, these models could be applied during routine clinical examinations to detect deviations from healthy ageing, including early-stage neurodegeneration. This could have important implications for patient care, drug development, and optimising MRI data collection. However, existing brain-age models are typically optimised for scans which are not part of routine examinations (e.g., volumetric T1-weighted scans), generalise poorly (e.g., to data from different scanner vendors and hospitals), or rely on computationally expensive pre-processing steps which limit real-time clinical utility. Here, we sought to develop a brain-age framework suitable for use during routine clinical head MRI examinations. Using a deep learning-based neuroradiology report classifier, we generated a dataset of 23,302 'radiologically normal for age' head MRI examinations from two large UK hospitals for model training and testing (age range = 18-95 years), and demonstrate fast (< 5 seconds), accurate (mean absolute error [MAE] < 4 years) age prediction from clinical-grade, minimally processed axial T2-weighted and axial diffusion-weighted scans, with generalisability between hospitals and scanner vendors (ΔMAE < 1 year). The clinical relevance of these brain-age predictions was tested using 228 patients whose MRIs were reported independently by neuroradiologists as showing atrophy 'excessive for age'. These patients had systematically higher brain-predicted age than chronological age (mean predicted age difference = +5.89 years, vs +0.05 years for the 'radiologically normal for age' group, p < 0.0001). Our brain-age framework demonstrates feasibility for use as a screening tool during routine hospital examinations to automatically detect older-appearing brains in real-time, with relevance for clinical decision-making and optimising patient pathways.
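
    As a hedged sketch of the two quantities reported above (all values illustrative, not data from the study): the mean absolute error (MAE) of age prediction, and the predicted age difference (brain-PAD, predicted minus chronological age) used to flag older-appearing brains.

        import numpy as np

        def mae(chronological, predicted):
            """Mean absolute error between predicted and chronological age, in years."""
            return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(chronological))))

        def brain_pad(chronological, predicted):
            """Predicted age difference: positive values suggest an older-appearing brain."""
            return np.asarray(predicted) - np.asarray(chronological)

        # Illustrative values only, not from the study.
        chron = np.array([42.0, 67.0, 75.0])
        pred = np.array([45.1, 66.2, 83.4])
        print(mae(chron, pred))        # ~4.1 years
        print(brain_pad(chron, pred))  # [ 3.1 -0.8  8.4]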

    Automated triaging of head MRI examinations using convolutional neural networks

    The growing demand for head magnetic resonance imaging (MRI) examinations, along with a global shortage of radiologists, has led to an increase in the time taken to report head MRI scans around the world. For many neurological conditions, this delay can result in increased morbidity and mortality. An automated triaging tool could reduce reporting times for abnormal examinations by identifying abnormalities at the time of imaging and prioritizing the reporting of these scans. In this work, we present a convolutional neural network for detecting clinically-relevant abnormalities in T2-weighted head MRI scans. Using a validated neuroradiology report classifier, we generated a labelled dataset of 43,754 scans from two large UK hospitals for model training, and demonstrate accurate classification (area under the receiver operating characteristic curve (AUC) = 0.943) on a test set of 800 scans labelled by a team of neuroradiologists. Importantly, when trained on scans from only a single hospital the model generalized to scans from the other hospital (ΔAUC ≤ 0.02). A simulation study demonstrated that our model would reduce the mean reporting time for abnormal examinations from 28 days to 14 days and from 9 days to 5 days at the two hospitals, demonstrating feasibility for use in a clinical triage environment. Comment: Accepted as an oral presentation at Medical Imaging with Deep Learning (MIDL) 202
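
    A hedged sketch of the triage principle (not the authors' actual simulation): under first-come-first-served reporting, abnormal scans wait as long as normal ones on average, whereas sorting the queue by the classifier's abnormality score moves them to the front. All numbers below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000
        abnormal = rng.random(n) < 0.4                    # illustrative abnormality rate
        # Imperfect classifier score (higher = more likely abnormal), illustrative only.
        score = np.clip(abnormal + rng.normal(0, 0.3, n), 0, 1)

        def mean_abnormal_wait(order):
            """Mean position in the reporting queue for abnormal scans (proxy for delay)."""
            wait = np.empty(n)
            wait[order] = np.arange(n)                    # reporting position of each scan
            return wait[abnormal].mean()

        fifo = np.arange(n)                               # first-come-first-served
        triage = np.argsort(-score)                       # highest-scoring scans first
        print(mean_abnormal_wait(fifo), mean_abnormal_wait(triage))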

    Labelling imaging datasets on the basis of neuroradiology reports: a validation study

    Natural language processing (NLP) shows promise as a means to automate the labelling of hospital-scale neuroradiology magnetic resonance imaging (MRI) datasets for computer vision applications. To date, however, there has been no thorough investigation into the validity of this approach, including determining the accuracy of report labels compared to image labels, or examining the performance of non-specialist labellers. In this work, we draw on the experience of a team of neuroradiologists who labelled over 5000 MRI neuroradiology reports as part of a project to build a dedicated deep learning-based neuroradiology report classifier. We show that, in our experience, assigning binary labels (i.e. normal vs abnormal) to images from reports alone is highly accurate. In contrast to the binary labels, however, the accuracy of more granular labelling is dependent on the category, and we highlight reasons for this discrepancy. We also show that downstream model performance is reduced when labelling of training reports is performed by a non-specialist. To help other researchers accelerate their work, we make our refined abnormality definitions and labelling rules available, along with our easy-to-use radiology report labelling app, which helps streamline this process.

    What Role Can Process Mining Play in Recurrent Clinical Guidelines Issues? A Position Paper

    In the age of Evidence-Based Medicine, Clinical Guidelines (CGs) are recognized to be an indispensable tool to support physicians in their daily clinical practice. Medical Informatics is expected to play a relevant role in facilitating the diffusion and adoption of CGs. However, past pioneering approaches, often fragmented across many disciplines, did not lead to solutions that are actually exploited in hospitals. Process Mining for Healthcare (PM4HC) is an emerging discipline gaining the interest of healthcare experts, and seems able to deal with many important issues in representing CGs. In this position paper, we briefly describe the story and state-of-the-art of CGs, and the efforts and results of past medical informatics approaches. We then describe PM4HC and address questions such as: how can PM4HC cope with this challenge, which role does PM4HC play, and which rules should be employed by the PM4HC scientific community?
    Gatta, R.; Vallati, M.; Fernández Llatas, C.; Martinez-Millana, A.; Orini, S.; Sacchi, L.; Lenkowicz, J.; et al. (2020). What Role Can Process Mining Play in Recurrent Clinical Guidelines Issues? A Position Paper. International Journal of Environmental Research and Public Health, 17(18), 1-19. https://doi.org/10.3390/ijerph17186616

    Determination of MRI diagnostic value in predicting pediatric brain tumors histological type

    Objective. To determine the diagnostic value of magnetic resonance imaging (MRI) in predicting the histological type of pediatric brain tumors in patients treated at the Hospital of the Lithuanian University of Health Sciences Kaunas Clinics (LSMUL KK) over the last five years (2011-2016), and to evaluate tumor-specific radiological features. Research tasks. 1) To compare MRI results with the histological examination results of pediatric brain tumors. 2) To evaluate the diagnostic value of MRI in predicting the histological type of pediatric brain tumors. 3) To evaluate the sensitivity and specificity of MRI in predicting the histological type of pediatric brain tumors. 4) To identify the most common radiological characteristics of pediatric brain tumors. Methods. The MRI images of 52 children with brain tumors, aged 0 to 17 years, were evaluated. We compared the MRI findings with the results of histological examination of each tumor to estimate the sensitivity, specificity, accuracy, and positive and negative predictive values of MRI for the diagnosis of different types of pediatric tumors. We also evaluated the radiological characteristics of the different pediatric brain tumors on MRI. Results. Across tumor types, the sensitivity of MRI ranged from 80% to 100%, specificity from 97.7% to 100%, accuracy from 96.15% to 100%, and positive and negative predictive values from 80% to 100% and from 96.55% to 100%, respectively. A correct MRI diagnosis was made for 88.4% of patients. 78.8% of all tumors enhanced strongly after contrast injection, almost 90% showed absent or minimal perifocal edema, and there was no significant difference between tumor structures. Conclusions. 1) There was no significant difference between MRI results and the histological findings of brain tumor biopsies. 2) MRI allows prediction of the histological type of pediatric brain tumors. 3) MRI is a very sensitive and specific diagnostic method for predicting the most common pediatric brain tumors. 4) The most common radiological characteristics of pediatric brain tumors are absent or minimal perifocal edema, strong contrast enhancement, and solid or cystic structure.
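
    The metrics quoted in the results follow directly from a per-tumor-type 2x2 confusion matrix; a minimal sketch with illustrative counts (not the study's data):

        def diagnostic_value(tp, fp, fn, tn):
            """Sensitivity, specificity, accuracy, PPV and NPV from a 2x2 confusion matrix."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "accuracy": (tp + tn) / (tp + fp + fn + tn),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Illustrative: MRI calls 4 of 5 tumors of a given type correctly in 52 children.
        print(diagnostic_value(tp=4, fp=1, fn=1, tn=46))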

    Computed tomographic features of the proximal petrous facial nerve canal in recurrent Bell's palsy

    Abstract Objectives The primary objective was to determine whether the narrowest dimensions of the labyrinthine facial nerve (LFN) canal on the symptomatic side in patients with unilateral recurrent Bell's palsy (BP) differ from those on the contralateral side or in asymptomatic, age‐ and gender‐matched controls on computed tomography (CT). The secondary objectives were to assess the extent of bony covering at the geniculate ganglion and to record inter‐observer reliability of the CT measurements. Methods The dimensions of the LFN canal at its narrowest point perpendicular to the long axis and the extent of bony covering at the geniculate ganglion were assessed by two radiologists. Statistical analysis was performed using the Wilcoxon signed‐rank and Mann‐Whitney U tests (LFN canal dimensions) and the Chi‐squared test (bony covering at the geniculate ganglion). Inter‐observer reliability was evaluated using Intra‐Class Correlation (ICC) and Cohen's kappa. Results The study included 21 patients with unilateral recurrent BP and 21 asymptomatic controls. There was no significant difference in the narrowest dimensions of the ipsilateral LFN canal when compared to the contralateral side or controls (P = .43‐.94). Similarly, there was no significant difference in the extent of bony covering at the geniculate ganglion when compared to either group (P = .19‐.8). Good inter‐observer reliability was observed for LFN measurements (ICC = 0.75‐0.88) but not for the bony covering at the geniculate ganglion (Cohen's kappa = 0.53). Conclusion The narrowest dimensions of the LFN canal and the extent of bony covering at the geniculate ganglion do not differ in unilateral recurrent BP, casting doubt over their etiological significance. Level of Evidence Level IV
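
    A sketch of the statistical workflow described above, using SciPy and scikit-learn on synthetic illustrative measurements (the ICC computation is omitted, as it is not in SciPy and would need a package such as pingouin):

        import numpy as np
        from scipy.stats import wilcoxon, mannwhitneyu
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(1)
        ipsi = rng.normal(1.1, 0.15, 21)    # narrowest LFN canal dimension, symptomatic side (mm)
        contra = rng.normal(1.1, 0.15, 21)  # contralateral side, same patients (paired)
        ctrl = rng.normal(1.1, 0.15, 21)    # matched controls (unpaired)

        print(wilcoxon(ipsi, contra))       # paired: symptomatic vs contralateral
        print(mannwhitneyu(ipsi, ctrl))     # unpaired: patients vs controls

        # Inter-observer agreement on a categorical rating (e.g. bony covering present/absent).
        rater1 = rng.integers(0, 2, 42)
        rater2 = (rater1 ^ (rng.random(42) < 0.2)).astype(int)  # second rater disagrees ~20%
        print(cohen_kappa_score(rater1, rater2))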

    Factors affecting the labelling accuracy of brain MRI studies relevant for deep learning abnormality detection

    Unlocking the vast potential of deep learning-based computer vision classification systems necessitates large datasets for model training. Natural Language Processing (NLP), by automating dataset labelling, represents a potential avenue to achieve this. However, many aspects of NLP for dataset labelling remain unvalidated. Expert radiologists manually labelled over 5,000 MRI head reports in order to develop a deep learning-based neuroradiology NLP report classifier. Our results demonstrate that binary labels (normal vs. abnormal) showed high rates of accuracy, even when only two MRI sequences (T2-weighted and diffusion-weighted) were employed rather than all sequences in an examination. Meanwhile, the accuracy of more specific labelling for multiple disease categories was variable and dependent on the category. Finally, resultant model performance was shown to depend on the expertise of the original labeller, with worse performance seen with non-expert vs. expert labellers.

    Automated Labelling using an Attention model for Radiology reports of MRI scans (ALARM)

    Labelling large datasets for training high-capacity neural networks is a major obstacle to the development of deep learning-based medical imaging applications. Here we present a transformer-based network for magnetic resonance imaging (MRI) radiology report classification which automates this task by assigning image labels on the basis of free-text expert radiology reports. Our model's performance is comparable to that of an expert radiologist, and better than that of an expert physician, demonstrating the feasibility of this approach. We make our code available online for researchers to label their own MRI datasets for medical imaging applications.
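
    As a generic, hedged sketch of transformer-based report classification with the Hugging Face transformers API (the checkpoint name, label count, and example report below are illustrative placeholders, not the authors' exact architecture):

        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        # Illustrative checkpoint; a biomedical variant would typically be used in practice.
        name = "bert-base-uncased"
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

        report = "MRI head: generalised volume loss in keeping with age. No acute infarct."
        inputs = tok(report, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            logits = model(**inputs).logits
        # The classification head is randomly initialised here; probabilities are only
        # meaningful after fine-tuning on labelled reports.
        print(torch.softmax(logits, dim=-1))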

    Deep learning to automate the labelling of head MRI datasets for computer vision applications

    OBJECTIVES: The purpose of this study was to build a deep learning model to derive labels from neuroradiology reports and assign these to the corresponding examinations, overcoming a bottleneck to computer vision model development. METHODS: Reference-standard labels were generated by a team of neuroradiologists for model training and evaluation. Three thousand examinations were labelled for the presence or absence of any abnormality by manually scrutinising the corresponding radiology reports (‘reference-standard report labels’); a subset of these examinations (n = 250) were assigned ‘reference-standard image labels’ by interrogating the actual images. Separately, 2000 reports were labelled for the presence or absence of 7 specialised categories of abnormality (acute stroke, mass, atrophy, vascular abnormality, small vessel disease, white matter inflammation, encephalomalacia), with a subset of these examinations (n = 700) also assigned reference-standard image labels. A deep learning model was trained using labelled reports and validated in two ways: comparing predicted labels to (i) reference-standard report labels and (ii) reference-standard image labels. The area under the receiver operating characteristic curve (AUC-ROC) was used to quantify model performance. Accuracy, sensitivity, specificity, and F1 score were also calculated. RESULTS: Accurate classification (AUC-ROC > 0.95) was achieved for all categories when tested against reference-standard report labels. A drop in performance (ΔAUC-ROC > 0.02) was seen for three categories (atrophy, encephalomalacia, vascular) when tested against reference-standard image labels, highlighting discrepancies in the original reports. Once trained, the model assigned labels to 121,556 examinations in under 30 min. CONCLUSIONS: Our model accurately classifies head MRI examinations, enabling automated dataset labelling for downstream computer vision applications. KEY POINTS: • Deep learning is poised to revolutionise image recognition tasks in radiology; however, a barrier to clinical adoption is the difficulty of obtaining large labelled datasets for model training. • We demonstrate a deep learning model which can derive labels from neuroradiology reports and assign these to the corresponding examinations at scale, facilitating the development of downstream computer vision models. • We rigorously tested our model by comparing labels predicted on the basis of neuroradiology reports with two sets of reference-standard labels: (1) labels derived by manually scrutinising each radiology report and (2) labels derived by interrogating the actual images. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00330-021-08132-0
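
    The two-way validation amounts to scoring the same model predictions against each reference standard; a minimal sketch with scikit-learn on synthetic labels (illustrative only) shows how a report/image label discrepancy surfaces as a ΔAUC-ROC:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        report_labels = rng.integers(0, 2, 250)            # reference-standard report labels
        image_labels = report_labels.copy()
        flip = rng.random(250) < 0.05                      # reports occasionally differ from images
        image_labels[flip] ^= 1
        # Model probabilities, illustrative: correlated with the report labels.
        pred = np.clip(report_labels + rng.normal(0, 0.35, 250), 0, 1)

        auc_report = roc_auc_score(report_labels, pred)
        auc_image = roc_auc_score(image_labels, pred)
        print(auc_report, auc_image, auc_report - auc_image)  # ΔAUC reflects the discrepancy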